
    Design and modeling of superconducting hardware for implementing quantum stabilizers

    Superconducting qubits are one of the leading systems for implementing quantum processors. Realizing fault-tolerant quantum computation requires some form of quantum error correction, which typically involves performing repeated stabilizer operations on groups of physical qubits in an array to form a logical qubit with enhanced protection against errors. Realizing a logical qubit that is suitable for running quantum algorithms requires an array with a significant number of physical qubits, which is extremely challenging. However, the physical qubit overhead can be reduced by lowering the error rate on the physical qubits. Current state-of-the-art superconducting qubit designs do not have robust protection against all types of errors. Reducing error rates on these conventional qubits requires further advances in fabrication, materials, and device packaging to reduce the noise and perturbations coupled to the qubit. Another approach to reducing the error rates is to develop new qubit designs that have intrinsic protection against all types of errors. The charge-parity qubit is one such design. Conventional superconducting qubits are based on Josephson junctions, which have a 2π-periodic dependence on the superconducting phase difference across the junction. The charge-parity qubit is formed from a chain of plaquettes, each of which behaves as a π-periodic Josephson element. For appropriate parameters, the effective coupling Hamiltonian between plaquettes in a charge-parity qubit is equivalent to the implementation of a quantum stabilizer in superconducting hardware. In this thesis, I present the experimental realization of plaquette-chain devices that exhibit such stabilizer behavior. The plaquette devices are fabricated with arrays of Josephson junctions, with multiple on-chip flux- and charge-bias lines for local biasing of the various device elements. Microwave spectroscopy measurements allow for a characterization of the transitions between the different energy levels of the plaquette chain and their dispersion with flux and charge bias of the various device elements. Extensive numerical modeling of the energy-level structure and comparison with the measured transition spectra indicate that the device exhibits protection against local noise. This work paves the way for future qubits based on this design, with optimized parameters and implementations capable of achieving dramatic reductions in error rates beyond the current state of the art.
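    To make the periodicity contrast concrete, here is a minimal worked sketch (the lowest-harmonic form of the plaquette potential and the symbols E_J, E_p are illustrative assumptions, not taken from the thesis): a conventional junction's energy is 2π-periodic in the phase difference, while an ideal plaquette element repeats every π.

```latex
% Conventional Josephson junction: 2π-periodic in the phase difference φ.
% Idealized π-periodic plaquette element, kept to its lowest harmonic
% (illustrative form, not the thesis's full plaquette Hamiltonian).
\begin{align}
  U_J(\varphi) &= -E_J \cos\varphi,   & U_J(\varphi + 2\pi) &= U_J(\varphi), \\
  U_p(\varphi) &= -E_p \cos 2\varphi, & U_p(\varphi + \pi)  &= U_p(\varphi).
\end{align}
```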

    Structure from Recurrent Motion: From Rigidity to Recurrency

    This paper proposes a new method for Non-Rigid Structure-from-Motion (NRSfM) from a long monocular video sequence observing a non-rigid object performing recurrent and possibly repetitive dynamic actions. Departing from the traditional idea of using a linear low-order or low-rank shape model for the task of NRSfM, our method exploits the property of shape recurrency (i.e., many deforming shapes tend to repeat themselves in time). We show that recurrency is in fact a generalized rigidity. Based on this, we reduce NRSfM problems to rigid ones provided that a certain recurrency condition is satisfied. Given such a reduction, standard rigid-SfM techniques are directly applicable (without any change) to the reconstruction of non-rigid dynamic shapes. To implement this idea as a practical approach, this paper develops efficient algorithms for automatic recurrency detection, as well as camera view clustering via a rigidity check. Experiments on both simulated sequences and real data demonstrate the effectiveness of the method. Since this paper offers a novel perspective on rethinking structure-from-motion, we hope it will inspire other new problems in the field. (Comment: To appear in CVPR 2018)
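    To illustrate the reduction the abstract describes, a minimal sketch (the descriptor, the threshold, and all names here are my assumptions, not the paper's algorithm): frames whose normalized 2D observations nearly repeat are grouped, and each group is handed to an off-the-shelf rigid-SfM routine as if it were a multi-view capture of a rigid object.

```python
import numpy as np

def detect_recurrent_groups(tracks, thresh=0.05):
    """Group frames that appear to observe the same (repeated) shape.

    tracks: (F, N, 2) array of N 2D point tracks over F frames.
    Uses a crude translation/scale-normalized descriptor per frame;
    the paper develops a dedicated recurrency-detection algorithm.
    """
    F = tracks.shape[0]
    descs = []
    for f in range(F):
        pts = tracks[f] - tracks[f].mean(axis=0)   # remove translation
        pts = pts / (np.linalg.norm(pts) + 1e-12)  # remove scale
        descs.append(pts.ravel())
    descs = np.stack(descs)

    groups, assigned = [], np.zeros(F, dtype=bool)
    for f in range(F):
        if assigned[f]:
            continue
        # frames whose normalized shapes nearly repeat frame f
        dists = np.linalg.norm(descs - descs[f], axis=1)
        members = np.where((dists < thresh) & ~assigned)[0]
        assigned[members] = True
        groups.append(members.tolist())
    return groups

def reconstruct(tracks, rigid_sfm):
    """Reduce NRSfM to a set of rigid-SfM problems.

    rigid_sfm: any standard rigid-SfM routine (e.g., factorization or
    bundle adjustment), applied unchanged to each recurrent group.
    """
    shapes = []
    for group in detect_recurrent_groups(tracks):
        if len(group) >= 2:   # need multiple views of the repeated shape
            shapes.append(rigid_sfm(tracks[group]))
    return shapes
```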

    High-Fidelity 3D Head Avatars Reconstruction through Spatially-Varying Expression Conditioned Neural Radiance Field

    One crucial aspect of 3D head avatar reconstruction lies in the details of facial expressions. Although recent NeRF-based photo-realistic 3D head avatar methods achieve high-quality avatar rendering, they still encounter challenges in retaining intricate facial expression details, because they overlook the potential of expression variations specific to different spatial positions when conditioning the radiance field. Motivated by this observation, we introduce a novel Spatially-Varying Expression (SVE) conditioning. The SVE can be obtained by a simple MLP-based generation network, encompassing both spatial positional features and global expression information. Benefiting from the rich and diverse information of the SVE at different positions, the proposed SVE-conditioned neural radiance field can handle intricate facial expressions and achieve realistic rendering and geometry details for high-fidelity 3D head avatars. Additionally, to further elevate the geometric and rendering quality, we introduce a new coarse-to-fine training strategy, including a geometry initialization strategy at the coarse stage and an adaptive importance sampling strategy at the fine stage. Extensive experiments indicate that our method outperforms other state-of-the-art (SOTA) methods in rendering and geometry quality on mobile phone-collected and public datasets. (Comment: 9 pages, 5 figures)
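    A minimal sketch of what such spatially-varying conditioning can look like (layer sizes and module names are my assumptions; the paper's architecture may differ): an MLP maps a positionally encoded 3D point together with a global expression code to a per-point SVE feature, which then conditions the radiance field in place of a single global expression vector shared by all points.

```python
import torch
import torch.nn as nn

class SVEGenerator(nn.Module):
    """Maps (positional features of x, global expression code) to a
    per-point spatially-varying expression (SVE) feature. Sizes are
    illustrative."""
    def __init__(self, pos_dim=63, expr_dim=64, sve_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim + expr_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, sve_dim),
        )

    def forward(self, pos_feat, expr_code):
        # pos_feat: (N, pos_dim) per-point positional encoding
        # expr_code: (expr_dim,) one global expression vector per frame
        expr = expr_code.expand(pos_feat.shape[0], -1)
        return self.mlp(torch.cat([pos_feat, expr], dim=-1))

class SVEConditionedField(nn.Module):
    """Radiance field conditioned on the per-point SVE feature rather
    than on one global expression vector."""
    def __init__(self, pos_dim=63, sve_dim=32):
        super().__init__()
        self.sve = SVEGenerator(pos_dim=pos_dim, sve_dim=sve_dim)
        self.field = nn.Sequential(
            nn.Linear(pos_dim + sve_dim, 256), nn.ReLU(),
            nn.Linear(256, 4),   # (RGB, density)
        )

    def forward(self, pos_feat, expr_code):
        sve = self.sve(pos_feat, expr_code)
        return self.field(torch.cat([pos_feat, sve], dim=-1))
```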

    Tensor4D: Efficient Neural 4D Decomposition for High-fidelity Dynamic Reconstruction and Rendering

    We present Tensor4D, an efficient yet effective approach to dynamic scene modeling. The key to our solution is an efficient 4D tensor decomposition method, so that the dynamic scene can be directly represented as a 4D spatio-temporal tensor. To tackle the accompanying memory issue, we decompose the 4D tensor hierarchically by projecting it first into three time-aware volumes and then into nine compact feature planes. In this way, spatial information over time can be simultaneously captured in a compact and memory-efficient manner. When applying Tensor4D to dynamic scene reconstruction and rendering, we further factorize the 4D fields into different scales, so that structural motions and detailed dynamic changes can be learned from coarse to fine. The effectiveness of our method is validated on both synthetic and real-world scenes. Extensive experiments show that our method is able to achieve high-quality dynamic reconstruction and rendering from sparse-view camera rigs or even a monocular camera. The code and dataset will be released at https://liuyebin.com/tensor4d/tensor4d.html
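    A minimal sketch of the plane-based lookup the abstract suggests (the specific nine axis pairs, resolutions, and names are my reading of the abstract, not the released code): each 4D query (x, y, z, t) bilinearly samples nine 2D feature planes derived from three time-aware volumes, and the sampled features are summed before being decoded by a small MLP.

```python
import torch
import torch.nn.functional as F

# Three time-aware volumes (x,y,t), (y,z,t), (x,z,t), each factorized
# into three 2D feature planes -> nine planes total (axis pairs below
# are my assumption; axes 0..3 index x, y, z, t).
PLANE_AXES = [
    (0, 1), (0, 3), (1, 3),   # from the (x, y, t) volume
    (1, 2), (1, 3), (2, 3),   # from the (y, z, t) volume
    (0, 2), (0, 3), (2, 3),   # from the (x, z, t) volume
]

class Tensor4DSketch(torch.nn.Module):
    def __init__(self, feat_dim=16, res=128):
        super().__init__()
        self.planes = torch.nn.ParameterList(
            [torch.nn.Parameter(0.1 * torch.randn(1, feat_dim, res, res))
             for _ in PLANE_AXES]
        )

    def forward(self, xyzt):
        # xyzt: (N, 4) query points with coordinates in [-1, 1]
        feats = 0.0
        for plane, (a, b) in zip(self.planes, PLANE_AXES):
            # grid_sample expects a (1, N, 1, 2) sampling grid
            grid = xyzt[:, [a, b]].view(1, -1, 1, 2)
            sampled = F.grid_sample(plane, grid, align_corners=True)
            feats = feats + sampled.view(plane.shape[1], -1).t()  # (N, C)
        return feats  # combined feature, to be decoded downstream
```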

    Learning Implicit Templates for Point-Based Clothed Human Modeling

    We present FITE, a First-Implicit-Then-Explicit framework for modeling human avatars in clothing. Our framework first learns implicit surface templates representing the coarse clothing topology, and then employs the templates to guide the generation of point sets which further capture pose-dependent clothing deformations such as wrinkles. Our pipeline incorporates the merits of both implicit and explicit representations, namely, the ability to handle varying topology and the ability to efficiently capture fine details. We also propose diffused skinning to facilitate template training, especially for loose clothing, and projection-based pose encoding to extract pose information from mesh templates without a predefined UV map or connectivity. Our code is publicly available at https://github.com/jsnln/fite. (Comment: Accepted to ECCV 2022)
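    A minimal sketch of the first-implicit-then-explicit idea (module names, sizes, and the displacement decoder are placeholders of mine, not the released code): stage one fits an implicit template surface; stage two takes points on that template and predicts pose-dependent offsets to produce an explicit point cloud that can carry fine detail such as wrinkles.

```python
import torch
import torch.nn as nn

class ImplicitTemplate(nn.Module):
    """Stage 1: signed-distance field of the coarse clothing template."""
    def __init__(self):
        super().__init__()
        self.sdf = nn.Sequential(
            nn.Linear(3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x):          # x: (N, 3) canonical-space points
        return self.sdf(x)         # (N, 1) signed distance

class ExplicitDeformer(nn.Module):
    """Stage 2: pose-dependent per-point displacements (e.g., wrinkles)
    applied to points sampled from the stage-1 template surface."""
    def __init__(self, pose_dim=72):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, template_pts, pose):
        # template_pts: (N, 3); pose: (pose_dim,) body pose parameters
        pose = pose.expand(template_pts.shape[0], -1)
        offsets = self.mlp(torch.cat([template_pts, pose], dim=-1))
        return template_pts + offsets   # explicit clothed point cloud
```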